    Weak Responses to Auditory Feedback Perturbation during Articulation in Persons Who Stutter: Evidence for Abnormal Auditory-Motor Transformation

    Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants’ compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls’ and had close-to-normal latencies (~150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
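
    The size of such a compensatory response can be summarized as the produced F1 change expressed as a fraction of the imposed feedback shift. The sketch below illustrates that calculation under our own simplifying assumptions (fixed-rate F1 tracks aligned to perturbation onset, an illustrative averaging window placed after the ~150 ms latency); it is not the authors' analysis pipeline.

        import numpy as np

        def compensation_ratio(f1_perturbed, f1_baseline, shift_hz,
                               fs_hz=1000.0, window_s=(0.15, 0.40)):
            """Compensation as a fraction of the imposed F1 shift (illustrative only).

            f1_perturbed, f1_baseline : mean F1 tracks (Hz) per condition, time-aligned
                                        to perturbation onset and sampled at fs_hz.
            shift_hz                  : signed size of the F1 shift applied to feedback.
            window_s                  : averaging window (s) placed after the response latency.
            """
            start, stop = (int(t * fs_hz) for t in window_s)
            produced_change = np.mean(f1_perturbed[start:stop]) - np.mean(f1_baseline[start:stop])
            # Compensation opposes the feedback shift, so a ratio of 0.1 means the
            # speaker counteracted 10% of the perturbation.
            return -produced_change / shift_hz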

    Rapid Change in Articulatory Lip Movement Induced by Preceding Auditory Feedback during Production of Bilabial Plosives

    BACKGROUND: There has been plentiful evidence of kinesthetically induced rapid compensation for unanticipated perturbation in speech articulatory movements. However, the role of auditory information in stabilizing articulation has been little studied except for the control of voice fundamental frequency, voice amplitude and vowel formant frequencies. Although the influence of auditory information on the articulatory control process is evident in unintended speech errors caused by delayed auditory feedback, the direct and immediate effect of auditory alteration on the movements of articulators has not been clarified. METHODOLOGY/PRINCIPAL FINDINGS: This work examined whether temporal changes in the auditory feedback of bilabial plosives immediately affect the subsequent lip movement. We conducted experiments with an auditory feedback alteration system that enabled us to replace or block speech sounds in real time. Participants were asked to produce the syllable /pa/ repeatedly at a constant rate. During the repetition, normal auditory feedback was interrupted, and one of three pre-recorded syllables /pa/, /Φa/, or /pi/, spoken by the same participant, was presented once at a different timing from the anticipated production onset, while no feedback was presented for subsequent repetitions. Comparisons of the labial distance trajectories under altered and normal feedback conditions indicated that the movement quickened during the short period immediately after the alteration onset, when /pa/ was presented 50 ms before the expected timing. Such change was not significant under the other feedback conditions we tested. CONCLUSIONS/SIGNIFICANCE: The earlier articulation rapidly induced by the progressive auditory input suggests that a compensatory mechanism helps to maintain a constant speech rate by detecting errors between the internally predicted and actually provided auditory information associated with self movement. The timing- and context-dependent effects of feedback alteration suggest that the sensory error detection works in a temporally asymmetric window where acoustic features of the syllable to be produced may be coded.

    Human Auditory Cortical Activation during Self-Vocalization

    During speaking, auditory feedback is used to adjust vocalizations. The brain systems mediating this integrative ability have been investigated using a wide range of experimental strategies. In this report we examined how vocalization alters speech-sound processing within auditory cortex by directly recording evoked responses to vocalizations and playback stimuli using intracranial electrodes implanted in neurosurgery patients. Several new findings resulted from these high-resolution invasive recordings in human subjects. Suppressive effects of vocalization were found to occur only within circumscribed areas of auditory cortex. In addition, at a smaller number of sites, the opposite pattern was seen; cortical responses were enhanced during vocalization. This increase in activity was reflected in high gamma power changes, but was not evident in the averaged evoked potential waveforms. These new findings support forward models for vocal control in which efference copies of premotor cortex activity modulate sub-regions of auditory cortex.

    Logopenic and nonfluent variants of primary progressive aphasia are differentiated by acoustic measures of speech production

    Differentiation of logopenic (lvPPA) and nonfluent/agrammatic (nfvPPA) variants of Primary Progressive Aphasia is important yet remains challenging since it hinges on expert-based evaluation of speech and language production. In this study, acoustic measures of speech in conjunction with voxel-based morphometry were used to determine the success of the measures as an adjunct to diagnosis and to explore the neural basis of apraxia of speech in nfvPPA. Forty-one patients (21 lvPPA, 20 nfvPPA) were recruited from a consecutive sample with suspected frontotemporal dementia. Patients were diagnosed using the current gold standard of expert perceptual judgment, based on the presence/absence of particular speech features during speaking tasks. Seventeen healthy age-matched adults served as controls. MRI scans were available for 11 control and 37 PPA cases; 23 of the PPA cases underwent amyloid ligand PET imaging. Measures, corresponding to perceptual features of apraxia of speech, were periods of silence during reading and relative vowel duration and intensity in polysyllable word repetition. Discriminant function analyses revealed that a measure of relative vowel duration differentiated nfvPPA cases from both control and lvPPA cases (r2 = 0.47), with 88% agreement with expert judgment of the presence of apraxia of speech in nfvPPA cases. VBM analysis showed that relative vowel duration covaried with grey matter intensity in areas critical for speech motor planning and programming: precentral gyrus, supplementary motor area and inferior frontal gyrus bilaterally, affected only in the nfvPPA group. This bilateral involvement of frontal speech networks in nfvPPA potentially affects access to compensatory mechanisms involving right hemisphere homologues. Measures of silences during reading also discriminated the PPA and control groups, but did not increase predictive accuracy. Findings suggest that a measure of relative vowel duration from a polysyllable word repetition task may be sufficient for detecting most cases of apraxia of speech and distinguishing between nfvPPA and lvPPA.
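
    As a rough illustration of how a single acoustic measure could drive such a discriminant function analysis, the sketch below fits a linear discriminant to a relative-vowel-duration feature; the feature values and group labels are invented placeholders, not the study's data.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        # Placeholder feature: one relative vowel duration value per case (invented numbers).
        X = np.array([[0.42], [0.45], [0.40], [0.61], [0.58], [0.62], [0.41], [0.39]])
        y = np.array(["lvPPA", "lvPPA", "control", "nfvPPA", "nfvPPA", "nfvPPA",
                      "control", "lvPPA"])

        lda = LinearDiscriminantAnalysis()
        lda.fit(X, y)

        print(lda.predict([[0.60]]))  # predicted group for a new case
        print(lda.score(X, y))        # in-sample classification accuracy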

    Recognizing Speech in a Novel Accent: The Motor Theory of Speech Perception Reframed

    The motor theory of speech perception holds that we perceive the speech of another in terms of a motor representation of that speech. However, when we have learned to recognize a foreign accent, it seems plausible that recognition of a word rarely involves reconstruction of the speech gestures of the speaker rather than the listener. To better assess the motor theory and this observation, we proceed in three stages. Part 1 places the motor theory of speech perception in a larger framework based on our earlier models of the adaptive formation of mirror neurons for grasping, and for viewing extensions of that mirror system as part of a larger system for neuro-linguistic processing, augmented by the present consideration of recognizing speech in a novel accent. Part 2 then offers a novel computational model of how a listener comes to understand the speech of someone speaking the listener's native language with a foreign accent. The core tenet of the model is that the listener uses hypotheses about the word the speaker is currently uttering to update probabilities linking the sound produced by the speaker to phonemes in the native language repertoire of the listener. This, on average, improves the recognition of later words. This model is neutral regarding the nature of the representations it uses (motor vs. auditory). It serves as a reference point for the discussion in Part 3, which proposes a dual-stream neuro-linguistic architecture to revisit claims for and against the motor theory of speech perception and the relevance of mirror neurons, and extracts some implications for the reframing of the motor theory.
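
    The core tenet lends itself to a simple count-based formulation: each accepted word hypothesis strengthens the links between the sounds just heard and the native phonemes they are taken to realize, so later recognition improves. The sketch below is a toy illustration of that idea under our own simplifying assumptions (one-to-one sound/phoneme alignment, add-one prior); it is not the paper's model.

        from collections import defaultdict

        class AccentAdapter:
            """Toy sound-to-phoneme mapping updated from accepted word hypotheses."""

            def __init__(self):
                # counts[heard_sound][native_phoneme] -> accumulated evidence (add-one prior)
                self.counts = defaultdict(lambda: defaultdict(lambda: 1.0))

            def update(self, heard_sounds, hypothesized_phonemes):
                # After a word is recognized, strengthen the links between the sounds
                # actually heard and the phonemes of the hypothesized native word.
                for sound, phoneme in zip(heard_sounds, hypothesized_phonemes):
                    self.counts[sound][phoneme] += 1.0

            def prob(self, heard_sound, native_phoneme):
                # P(native phoneme | heard sound) under the evidence gathered so far.
                row = self.counts[heard_sound]
                numerator = row[native_phoneme]
                return numerator / sum(row.values())

        adapter = AccentAdapter()
        adapter.update(["e"], ["e"])       # an early, competing hypothesis
        for _ in range(5):                 # the accented [e] keeps being matched to /i/
            adapter.update(["e"], ["i"])
        print(adapter.prob("e", "i"))      # 0.75 here, rising as evidence accumulates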

    Genetic Knock-Down of Hdac3 Does Not Modify Disease-Related Phenotypes in a Mouse Model of Huntington's Disease

    Huntington's disease (HD) is an autosomal dominant progressive neurodegenerative disorder caused by an expansion of a CAG/polyglutamine repeat for which there are no disease-modifying treatments. In recent years, transcriptional dysregulation has emerged as a pathogenic process that appears early in disease progression and has been recapitulated across multiple HD models. Altered histone acetylation has been proposed to underlie this transcriptional dysregulation, and histone deacetylase (HDAC) inhibitors, such as suberoylanilide hydroxamic acid (SAHA), have been shown to improve polyglutamine-dependent phenotypes in numerous HD models. However, potent pan-HDAC inhibitors such as SAHA display toxic side-effects. To better understand the mechanism underlying this potential therapeutic benefit and to dissociate the beneficial and toxic effects of SAHA, we set out to identify the specific HDAC(s) involved in this process. For this purpose, we are exploring the effect of the genetic reduction of specific HDACs on HD-related phenotypes in the R6/2 mouse model of HD. The study presented here focuses on HDAC3, which, as a class I HDAC, is one of the preferred targets of SAHA and is directly involved in histone deacetylation. To evaluate a potential benefit of Hdac3 genetic reduction in R6/2, we generated a mouse carrying a critical deletion in the Hdac3 gene. We confirmed that the complete knock-out of Hdac3 is embryonic lethal. To test the effects of HDAC3 inhibition, we used Hdac3+/− heterozygotes to reduce nuclear HDAC3 levels in R6/2 mice. We found that Hdac3 knock-down does not ameliorate physiological or behavioural phenotypes and has no effect on molecular changes including dysregulated transcripts. We conclude that HDAC3 should not be considered as the major mediator of the beneficial effect induced by SAHA and other HDAC inhibitors in HD.

    Functional MRI of Auditory Responses in the Zebra Finch Forebrain Reveals a Hierarchical Organisation Based on Signal Strength but Not Selectivity

    BACKGROUND: Male songbirds learn their songs from an adult tutor when they are young. A network of brain nuclei known as the 'song system' is the likely neural substrate for sensorimotor learning and production of song, but the neural networks involved in processing the auditory feedback signals necessary for song learning and maintenance remain unknown. Determining which regions show preferential responsiveness to the bird's own song (BOS) is of great importance because neurons sensitive to self-generated vocalisations could mediate this auditory feedback process. Neurons in the song nuclei and in a secondary auditory area, the caudal medial mesopallium (CMM), show selective responses to the BOS. The aim of the present study is to investigate the emergence of BOS selectivity within the network of primary auditory sub-regions in the avian pallium. METHODS AND FINDINGS: Using blood oxygen level-dependent (BOLD) fMRI, we investigated neural responsiveness to natural and manipulated self-generated vocalisations and compared the selectivity for BOS and conspecific song in different sub-regions of the thalamo-recipient area Field L. Zebra finch males were exposed to conspecific song, BOS and to synthetic variations on BOS that differed in spectro-temporal and/or modulation phase structure. We found significant differences in the strength of BOLD responses between regions L2a, L2b and CMM, but no inter-stimuli differences within regions. In particular, we have shown that the overall signal strength to song and synthetic variations thereof was different within two sub-regions of Field L2: zone L2a was significantly more activated compared to the adjacent sub-region L2b. CONCLUSIONS: Based on our results, we suggest that unlike nuclei in the song system, sub-regions in the primary auditory pallium do not show selectivity for the BOS, but appear to show different levels of activity with exposure to any sound according to their place in the auditory processing stream.

    Interactive Language Learning by Robots: The Transition from Babbling to Word Forms

    The advent of humanoid robots has enabled a new approach to investigating the acquisition of language, and we report on the development of robots able to acquire rudimentary linguistic skills. Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms. We investigate one mechanism among many that may contribute to this process, a key factor being the sensitivity of learners to the statistical distribution of linguistic elements. As well as being necessary for learning word meanings, the acquisition of anchor word forms facilitates the segmentation of an acoustic stream through other mechanisms. In our experiments some salient one-syllable word forms are learnt by a humanoid robot in real-time interactions with naive participants. Words emerge from random syllabic babble through a learning process based on a dialogue between the robot and the human participant, whose speech is perceived by the robot as a stream of phonemes. Numerous ways of representing the speech as syllabic segments are possible. Furthermore, the pronunciation of many words in spontaneous speech is variable. However, in line with research elsewhere, we observe that salient content words are more likely than function words to have consistent canonical representations; thus their relative frequency increases, as does their influence on the learner. Variable pronunciation may contribute to early word form acquisition. The importance of contingent interaction in real time between teacher and learner is reflected by a reinforcement process, with variable success. The examination of individual cases may be more informative than group results. Nevertheless, word forms are usually produced by the robot after a few minutes of dialogue, employing a simple, real-time, frequency-dependent mechanism. This work shows the potential of human-robot interaction systems in studies of the dynamics of early language acquisition.
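
    The frequency-dependent mechanism described above can be caricatured as a salience table over syllables: forms heard often in the participant's speech, and forms whose production earns approval, come to dominate the robot's output. The sketch below is such a caricature under our own assumptions (counter-based salience, a fixed approval bonus); it is not the published system.

        from collections import Counter
        import random

        class BabbleLearner:
            """Toy frequency-dependent word-form learner (illustrative parameters)."""

            def __init__(self, syllables):
                # Start from uniform salience over the robot's babbling repertoire.
                self.salience = Counter({s: 1 for s in syllables})

            def hear(self, participant_syllables):
                # Syllables heard in the participant's speech gain salience.
                self.salience.update(participant_syllables)

            def babble(self):
                # Produce a form with probability proportional to its salience, so
                # frequently heard (often content-word) forms gradually dominate.
                forms, weights = zip(*self.salience.items())
                return random.choices(forms, weights=weights, k=1)[0]

            def reinforce(self, produced, approved):
                # Contingent approval from the human teacher boosts the produced form.
                if approved:
                    self.salience[produced] += 5

        learner = BabbleLearner(["ba", "da", "red", "ball"])
        for _ in range(20):
            learner.hear(["red", "ball", "red"])      # salient content words recur
        form = learner.babble()
        learner.reinforce(form, approved=form in {"red", "ball"})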

    Pre-Clinical Evaluation of a Replication-Competent Recombinant Adenovirus Serotype 4 Vaccine Expressing Influenza H5 Hemagglutinin

    Influenza virus remains a significant health and social concern in part because of newly emerging strains, such as avian H5N1 virus. We have developed a prototype H5N1 vaccine using a recombinant, replication-competent Adenovirus serotype 4 (Ad4) vector, derived from the U.S. military Ad4 vaccine strain, to express the hemagglutinin (HA) gene from A/Vietnam/1194/2004 influenza virus (Ad4-H5-Vtn). Our hypothesis is that a mucosally delivered replicating Ad4-H5-Vtn recombinant vector will be safe and induce protective immunity against H5N1 influenza virus infection and disease pathogenesis. The Ad4-H5-Vtn vaccine was designed with a partial deletion of the E3 region of Ad4 to accommodate the influenza HA gene. Replication and growth kinetics of the vaccine virus in multiple human cell lines indicated that the vaccine virus is attenuated relative to the wild type virus. Expression of the HA transgene in infected cells was documented by flow cytometry, Western blot analysis and induction of HA-specific antibody and cellular immune responses in mice. Of particular note, mice immunized intranasally with the Ad4-H5-Vtn vaccine were protected against lethal H5N1 reassortant viral challenge even in the presence of pre-existing immunity to the Ad4 wild type virus. Several non-clinical attributes of this vaccine, including safety, induction of HA-specific humoral and cellular immunity, and efficacy, were demonstrated using an animal model to support Phase 1 clinical trial evaluation of this new vaccine.